215 research outputs found

    Clustering before training large datasets - Case study: K-SVD

    Training and using overcomplete dictionaries has been the subject of many developments in the area of signal processing and sparse representations. The main idea is to train a dictionary that achieves good sparse representations of the items contained in a given dataset. The most popular approach is the K-SVD algorithm, and in this paper we study its application to large datasets. The main interest is to speed up the training procedure while keeping the representation errors close to specified target values. This goal is reached by using a clustering procedure, called here T-mindot, which reduces the size of the dataset while keeping the most representative data items and a measure of their importance. Experimental simulations compare the running times and representation errors of the training method with and without the clustering procedure, and they clearly show how effective T-mindot is.
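    The abstract does not spell out T-mindot, so the sketch below is only illustrative: a minimal K-SVD loop built on scikit-learn's OMP solver, with plain k-means standing in for T-mindot as the dataset-reduction step (all sizes and parameter values are assumptions).

        import numpy as np
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.linear_model import orthogonal_mp

        def reduce_dataset(Y, n_reduced, seed=0):
            # Stand-in for T-mindot: k-means centroids act as representative items,
            # with cluster sizes kept as a crude importance measure.
            km = MiniBatchKMeans(n_clusters=n_reduced, n_init=3, random_state=seed).fit(Y.T)
            weights = np.bincount(km.labels_, minlength=n_reduced).astype(float)
            return km.cluster_centers_.T, weights

        def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
            # Plain K-SVD: alternate OMP sparse coding with per-atom SVD updates.
            rng = np.random.default_rng(seed)
            D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].copy()
            D /= np.linalg.norm(D, axis=0)
            for _ in range(n_iter):
                X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
                for k in range(n_atoms):
                    users = np.flatnonzero(X[k])
                    if users.size == 0:
                        continue
                    # Residual without atom k, restricted to the items that use it.
                    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
                    U, s, Vt = np.linalg.svd(E, full_matrices=False)
                    D[:, k], X[k, users] = U[:, 0], s[0] * Vt[0]
            return D, X

        Y = np.random.randn(64, 5000)           # 5000 signals of dimension 64 (toy data)
        Y_small, w = reduce_dataset(Y, 500)     # train on 500 representatives instead
        D, X = ksvd(Y_small, n_atoms=128, sparsity=4)

    Training on the 500 centroids instead of the full 5000 signals is where the speedup comes from; how the importance weights enter the dictionary update is specific to the paper and is left out of this sketch.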

    Classification of music genres using sparse representations in overcomplete dictionaries

    This paper presents a simple but efficient and robust method for music genre classification that uses sparse representations in overcomplete dictionaries. The training step creates dictionaries, using the K-SVD algorithm, in which the data corresponding to a particular music genre has a sparse representation. In the classification step, the Orthogonal Matching Pursuit (OMP) algorithm is used to separate feature vectors that consist only of Linear Predictive Coding (LPC) coefficients. The paper analyses in detail a popular case study from the literature, the ISMIR 2004 database. Using the presented method, the correct classification rate over the 6 music genres is 85.59%, a result comparable with the best results published so far.
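    A hedged sketch of the pipeline described above, assuming per-genre matrices of LPC feature vectors. Scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, and assigning a vector to the genre with the smallest OMP reconstruction residual is one plausible reading of how OMP separates the feature vectors; names and parameter values are assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import orthogonal_mp

        def train_genre_dictionaries(features_by_genre, n_atoms=64, sparsity=5):
            # One dictionary per genre; MiniBatchDictionaryLearning stands in for K-SVD.
            dicts = {}
            for genre, F in features_by_genre.items():   # F: (n_tracks, n_lpc)
                dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                                 transform_algorithm="omp",
                                                 transform_n_nonzero_coefs=sparsity,
                                                 random_state=0).fit(F)
                dicts[genre] = dl.components_.T           # columns are unit-norm atoms
            return dicts

        def classify(x, dicts, sparsity=5):
            # Assign x to the genre whose dictionary gives the smallest OMP residual.
            best, best_err = None, np.inf
            for genre, D in dicts.items():
                code = orthogonal_mp(D, x, n_nonzero_coefs=sparsity)
                err = np.linalg.norm(x - D @ code)
                if err < best_err:
                    best, best_err = genre, err
            return best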

    Block Orthonormal Overcomplete Dictionary Learning

    In the field of sparse representations, the overcomplete dictionary learning problem is of crucial importance and has a growing pool of applications. In this paper we present an iterative dictionary learning algorithm, based on the singular value decomposition, that efficiently constructs unions of orthonormal bases. The innovations described in this paper, which positively affect the running time of the learning procedure, are the way in which the sparse representations are computed (each data item is reconstructed in a single orthonormal basis, avoiding slow sparse approximation algorithms), the way the bases in the union are used and updated individually, and the way the union itself is expanded by looking at the worst reconstructed data items. The numerical experiments conclusively show the speedup induced by our method compared to previous works, for the same target representation error.
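    The remark about reconstructing data in a single orthonormal basis can be made concrete: with an orthonormal basis, the best s-term approximation is obtained by projecting and hard-thresholding the coefficients, with no pursuit algorithm needed. A minimal illustration (random basis and sizes are assumptions):

        import numpy as np

        def sparse_code_in_orthobasis(B, x, s):
            # With an orthonormal basis B, the best s-term approximation is exact and
            # cheap: project, keep the s largest-magnitude coefficients, zero the rest.
            c = B.T @ x                          # full coefficient vector
            keep = np.argsort(np.abs(c))[-s:]    # indices of the s largest coefficients
            code = np.zeros_like(c)
            code[keep] = c[keep]
            return code                          # reconstruction is B @ code

        rng = np.random.default_rng(0)
        B, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # random orthonormal basis
        x = rng.standard_normal(64)
        code = sparse_code_in_orthobasis(B, x, s=8)
        err = np.linalg.norm(x - B @ code)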

    Estimation of Scribble Placement for Painting Colorization

    Image colorization has been a topic of interest since the mid-1970s, and several algorithms have been proposed that, given a grayscale image and color scribbles (hints), produce a colorized image. Recently, this approach has been introduced in the field of art conservation and cultural heritage, where B&W photographs of paintings at previous stages have been colorized. However, the questions of what the minimum necessary number of scribbles is and where they should be placed in an image remain unexplored. Here we address this limitation using an iterative algorithm that provides insight into the relationship between locally and globally important scribbles. Given a color image, we randomly select scribbles and attempt to color the grayscale version of the original. We define a scribble contribution measure based on the reconstruction error. We demonstrate our approach using a widely used colorization algorithm and images from a Picasso painting and the peppers test image. We show that areas isolated by thick brushstrokes or areas with high textural variation are locally important but contribute very little to the overall representation accuracy. We also find that, for the Picasso painting, on average 10% scribble coverage is enough, and that flat areas can be represented by a few scribbles. The proposed method can be used verbatim to test any colorization algorithm.
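    The paper's exact contribution measure is not given in the abstract; one plausible leave-one-out formulation is sketched below, where colorize(gray, scribbles) is an assumed callable wrapping any scribble-based colorization algorithm (for example, the optimization-based method of Levin et al.).

        import numpy as np

        def scribble_contributions(gray, color_ref, scribbles, colorize):
            # Leave-one-out contribution: how much the reconstruction error grows
            # when a scribble is removed. `scribbles` is a list; `colorize` is an
            # assumed callable returning a colorized image of the same shape as
            # the reference color image `color_ref`.
            def err(result):
                return np.mean((result.astype(float) - color_ref.astype(float)) ** 2)

            base = err(colorize(gray, scribbles))
            contrib = []
            for i in range(len(scribbles)):
                reduced = scribbles[:i] + scribbles[i + 1:]
                contrib.append(err(colorize(gray, reduced)) - base)
            return np.array(contrib)

    Scribbles whose removal barely changes the error are only locally important in the sense described above; globally important scribbles produce a large error increase.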

    Radial basis functions versus geostatistics in spatial interpolations

    A key problem in environmental monitoring is spatial interpolation. The main current approach to spatial interpolation is geostatistical, yet geostatistics is neither the only nor the best spatial interpolation method; in fact, there is no universally valid “best” method. Choosing a particular method implies making assumptions. Understanding the initial assumptions and the methods used, and correctly interpreting the interpolation results, are key elements of the spatial interpolation process. A powerful alternative to geostatistics in spatial interpolation is the use of soft computing methods, which offer the potential for a more flexible, less assumption-dependent approach. Artificial Neural Networks are well suited to this kind of problem, due to their ability to handle non-linear, noisy, and inconsistent data. The present paper intends to prove the advantage of using Radial Basis Functions (RBF) instead of geostatistics in spatial interpolation, based on a detailed analysis and modeling of the SIC2004 (Spatial Interpolation Comparison) dataset. IFIP International Conference on Artificial Intelligence in Theory and Practice - Neural Nets. Red de Universidades con Carreras en Informática (RedUNCI).
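    The paper works with RBF networks on the SIC2004 data; as a generic illustration of radial-basis interpolation of scattered spatial measurements (not the paper's exact setup, and with synthetic station data in place of SIC2004), SciPy's RBFInterpolator can be used as follows.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        pts = rng.uniform(0.0, 100.0, size=(200, 2))            # station coordinates
        vals = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.standard_normal(200)

        # Thin-plate-spline RBF with light smoothing (kernel choice is an assumption).
        rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1.0)

        gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        pred = rbf(grid).reshape(gx.shape)                      # interpolated surface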
